
    Adaptive resource optimization for edge inference with goal-oriented communications

    Goal-oriented communications represent an emerging paradigm for efficient and reliable learning at the wireless edge, where only the information relevant to the specific learning task is transmitted to perform inference and/or training. The aim of this paper is to introduce a novel system design and algorithmic framework to enable goal-oriented communications. Specifically, inspired by the information bottleneck principle and targeting an image classification task, we dynamically change the size of the data to be transmitted by exploiting banks of convolutional encoders at the device, in order to extract meaningful and parsimonious data features in a totally adaptive and goal-oriented fashion. Exploiting knowledge of the system conditions, such as the channel state and the computation load, these features are dynamically transmitted to an edge server that takes the final decision, based on a proper convolutional classifier. Hinging on Lyapunov stochastic optimization, we devise a novel algorithmic framework that dynamically and jointly optimizes communication, computation, and the convolutional encoder/classifier pair, in order to strike a desired trade-off between the energy, latency, and accuracy of the edge learning task. Several simulation results illustrate the effectiveness of the proposed strategy for edge learning with goal-oriented communications.
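
    The control layer described above rests on the Lyapunov drift-plus-penalty recipe. As a rough, hedged illustration of that recipe only (the cost models, candidate set, and constants below are our assumptions, not the authors' algorithm), a per-slot controller can pick a feature size and transmit power by greedily minimizing energy plus a virtual queue that enforces an average-latency budget:

```python
import random

# Toy drift-plus-penalty controller (illustrative assumptions throughout).
V = 10.0          # energy vs. constraint-violation trade-off weight
Q = 0.0           # virtual queue enforcing an average-latency budget
L_MAX = 0.05      # assumed per-slot latency budget (seconds)

CANDIDATES = [(8, 0.1), (16, 0.2), (32, 0.4)]   # (feature size, tx power)

def energy(feat, p):       # placeholder cost models, not the paper's
    return 0.01 * feat + p

def latency(feat, p, h):   # h = channel gain; more power/gain -> faster
    return feat / (100.0 * p * h)

for t in range(1000):
    h = random.uniform(0.5, 2.0)    # observed channel state this slot
    # Greedily minimize V*energy + Q*(latency - budget) over candidates
    feat, p = min(CANDIDATES,
                  key=lambda a: V * energy(*a) + Q * (latency(*a, h) - L_MAX))
    # The queue grows whenever the latency budget is violated
    Q = max(Q + latency(feat, p, h) - L_MAX, 0.0)
```

    The single virtual queue Q stands in for the paper's full set of constraints; raising V favors energy savings at the price of slower constraint convergence.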

    Goal-oriented Communications for the IoT: System Design and Adaptive Resource Optimization

    Internet of Things (IoT) applications combine sensing, wireless communication, intelligence, and actuation, enabling the interaction among heterogeneous devices that collect and process considerable amounts of data. However, the effectiveness of IoT applications is constrained by the limited available resources, including spectrum, energy, computing, and learning and inference capabilities. This paper challenges the prevailing approach to IoT communication, which prioritizes the usage of resources in order to guarantee perfect recovery, at the bit level, of the data transmitted by the sensors to the central unit. We propose a novel approach, called goal-oriented (GO) IoT system design, that transcends traditional bit-related metrics and focuses directly on the fulfillment of the goal motivating the exchange of data. The improvement is then achieved through a comprehensive system optimization, integrating sensing, communication, computation, learning, and control. We provide numerical results demonstrating the practical applications of our methodology in compelling use cases such as edge inference, cooperative sensing, and federated learning. These examples highlight the effectiveness and real-world implications of our proposed approach, which has the potential to revolutionize IoT systems. (Comment: Accepted for publication in IEEE Internet of Things Magazine, special issue on "Task-Oriented Communications and Networking for the Internet of Things".)
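
    Of the three use cases named above, federated learning lends itself to a compact illustration of the goal-oriented idea: devices exchange model updates rather than raw sensor data, because the goal is the trained model, not bit-exact data recovery. The FedAvg-style round below is a generic sketch under our own assumptions (the article does not prescribe this scheme):

```python
import numpy as np

# Toy FedAvg round (generic sketch; the article does not prescribe this).
def fedavg_round(global_w, local_grads, samples, lr=0.1):
    # Weight each device's update by its local sample count, then step.
    total = sum(samples)
    update = sum((n / total) * g for g, n in zip(local_grads, samples))
    return global_w - lr * update

w = np.zeros(4)                           # shared model parameters
grads = [np.ones(4), 2.0 * np.ones(4)]    # stand-in local gradients
w = fedavg_round(w, grads, samples=[100, 300])
print(w)                                  # [-0.175 -0.175 -0.175 -0.175]
```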

    Maximum Upward Planar Subgraphs of Embedded Planar Digraphs

    This paper presents an extensive study of the problem of computing maximum upward planar subgraphs of embedded planar digraphs: complexity results, algorithms, and experiments are presented. Namely: (i) we prove that the addressed problem is NP-hard; (ii) a fast heuristic and an exponential-time exact algorithm are described; (iii) a wide experimental analysis is performed to show the effectiveness of our techniques.
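
    As a hedged reading of the exact-algorithm shape only (the paper's actual algorithm and its upward-planarity test are not reproduced here), an exponential-time exact search can enumerate edge subsets from largest to smallest and return the first feasible one. Below, the feasibility predicate is a placeholder, stood in by a simple acyclicity check so the sketch runs:

```python
from itertools import combinations
from graphlib import TopologicalSorter, CycleError

def is_acyclic(nodes, edges):
    # Stand-in feasibility test; the real algorithm needs an
    # upward-planarity test for the fixed embedding instead.
    ts = TopologicalSorter({v: [] for v in nodes})
    for u, v in edges:
        ts.add(v, u)          # u must precede v
    try:
        list(ts.static_order())
        return True
    except CycleError:
        return False

def max_feasible_subgraph(nodes, edges, feasible=is_acyclic):
    # Exhaustive search: try edge subsets from largest to smallest and
    # return the first one passing the feasibility test.
    for k in range(len(edges), -1, -1):
        for subset in combinations(edges, k):
            if feasible(nodes, subset):
                return list(subset)
    return []

# A 2-cycle forces dropping one edge:
print(max_feasible_subgraph([1, 2], [(1, 2), (2, 1)]))   # [(1, 2)]
```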

    Multi-user goal-oriented communications with energy-efficient edge resource management

    Edge Learning (EL) pushes computational resources toward the edge of the 5G/6G network to assist mobile users requesting delay-sensitive and energy-aware intelligent services. A common challenge in running inference tasks remotely is to extract and transmit only the features that are most significant for the inference task. From this perspective, EL can be effectively coupled with goal-oriented communications, whose aim is to transmit only the information relevant to performing the inference task, under prescribed accuracy, delay, and energy constraints. In this work, we consider a multi-user/single-server wireless network, where the users can opportunistically decide whether to perform the inference task by themselves or, alternatively, to offload the data to the edge server for remote processing. The data to be transmitted undergoes a goal-oriented compression stage performed using a convolutional encoder, jointly trained with a convolutional decoder running at the edge-server side. Employing Lyapunov optimization, we propose a method to jointly and dynamically optimize the selection of the most suitable encoding/decoding scheme, together with the allocation of computational and transmission resources, across all the users and the edge server. Extensive simulations confirm the effectiveness of the proposed approaches and highlight the trade-offs between energy, latency, and learning accuracy.
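
    The local-versus-offload decision can again be framed with drift-plus-penalty. The toy controller below is our own illustration (the cost and accuracy numbers, and the single accuracy constraint, are assumptions): each slot, a user compares the penalized cost of computing locally against offloading over the current channel:

```python
import random

V, Q = 5.0, 0.0       # trade-off weight and virtual accuracy-queue
ACC_TARGET = 0.90     # assumed long-term average-accuracy constraint

def local(h):         # (energy, latency, accuracy) of on-device inference
    return 0.8, 0.10, 0.88

def offload(h):       # offloading: channel-dependent energy and latency
    return 0.3 / h, 0.05 / h, 0.95

for t in range(1000):
    h = random.uniform(0.5, 2.0)     # this user's channel gain this slot
    # Drift-plus-penalty: energy penalty plus queue-weighted accuracy slack
    choice = min((local, offload),
                 key=lambda f: V * f(h)[0] + Q * (ACC_TARGET - f(h)[2]))
    e, d, acc = choice(h)
    Q = max(Q + ACC_TARGET - acc, 0.0)   # grows whenever accuracy lags
```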

    Dynamic Resource Allocation for Multi-User Goal-oriented Communications at the Wireless Edge

    This paper proposes a wireless, goal-oriented, multi-user communication system assisted by edge computing, within the general framework of Edge Machine Learning (EML). Specifically, we consider a set of mobile devices that, exploiting convolutional encoders (CEs), namely the encoder part of convolutional auto-encoders (CAEs), send compressed data units to an edge server (ES) that performs a specific learning task, such as image classification. The training of both the CEs and the ES classification networks is performed in an off-line fashion, employing a cross-entropy loss regularized by the mean squared error of the CAE expanded output. Then, exploiting such a goal-oriented architecture and employing a Lyapunov optimization framework, we consider the joint management of computation and transmission resources for the overall system. In particular, we consider a Multi-User Minimum Energy Resource Allocation Strategy (mu-MERAS), which provides the optimal resource allocation for both the devices and the ES from an energy-efficiency perspective. Simulation results highlight a classical EML trade-off between energy, latency, and accuracy, as well as the effectiveness of the proposed approach in adaptively managing resources according to wireless channel conditions, computing requests, and classification reliability.
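
    The training objective described above, cross-entropy on the classifier output regularized by the CAE reconstruction MSE, can be sketched as follows; the architecture sizes, the weight lam, and the use of PyTorch are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy shapes: 3x32x32 images, 10 classes.
enc = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU())
dec = nn.Sequential(nn.ConvTranspose2d(8, 3, 3, stride=2,
                                       padding=1, output_padding=1))
clf = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16 * 16, 10))

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(),
                        *clf.parameters()], lr=1e-3)
lam = 0.1                                 # assumed regularization weight

x = torch.randn(4, 3, 32, 32)             # stand-in image batch
y = torch.randint(0, 10, (4,))            # stand-in labels

z = enc(x)                                # compressed features to transmit
# Cross-entropy goal loss + lam * CAE reconstruction regularizer
loss = F.cross_entropy(clf(z), y) + lam * F.mse_loss(dec(z), x)
opt.zero_grad(); loss.backward(); opt.step()
```

    At deployment, only enc runs on the device and clf at the edge server; dec exists solely to shape the feature space during training.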

    Opportunistic Information-Bottleneck for Goal-Oriented Feature Extraction and Communication

    The Information Bottleneck (IB) method is an information-theoretical framework for designing a parsimonious and tunable feature-extraction mechanism, such that the extracted features are maximally relevant to a specific learning or inference task. Despite its theoretical value, the IB is based on a functional optimization problem that admits a closed-form solution only in specific cases (e.g., Gaussian distributions), making it difficult to apply in most applications, where it is necessary to resort to complex and approximate variational implementations. To overcome this limitation, we propose an approach to adapt the closed-form solution of the Gaussian IB to a general task. Whatever the inference task to be performed by a (possibly deep) neural network, the key idea is to opportunistically design a regression sub-task, embedded in the original problem, where we can safely assume (joint) multivariate normality between the sub-task's inputs and outputs. In this way we can exploit a fixed and pre-trained neural network to process the input data, using a tunable number of features, to trade data size and complexity for accuracy. This approach is particularly useful whenever a device needs to transmit data (or features) to a server that has to fulfill an inference task, as it provides a principled way to extract the features most relevant to the task to be executed, while seeking the best trade-off among the size of the feature vector to be transmitted, inference accuracy, and complexity. Extensive simulation results attest to the effectiveness of the proposed method and encourage further investigation along this research line.
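
    The Gaussian-IB ingredient being reused has a well-known closed form: under joint Gaussianity, the optimal k-dimensional feature extractor is assembled from the left eigenvectors of the matrix Sx_y @ inv(Sx) (conditional covariance times inverse covariance) associated with the smallest eigenvalues. The NumPy sketch below illustrates that computation on synthetic data (the data model and k are illustrative; this is not the paper's embedded regression sub-task):

```python
import numpy as np

rng = np.random.default_rng(0)
# Jointly Gaussian toy data: y is a noisy linear function of x.
X = rng.standard_normal((5000, 6))
Y = X @ rng.standard_normal((6, 2)) + 0.5 * rng.standard_normal((5000, 2))

Sx = np.cov(X.T)
Sy = np.cov(Y.T)
Sxy = np.cov(X.T, Y.T)[:6, 6:]                  # cross-covariance block
Sx_y = Sx - Sxy @ np.linalg.inv(Sy) @ Sxy.T     # conditional covariance

# Left eigenvectors of Sx_y @ inv(Sx) = eigenvectors of its transpose;
# the smallest eigenvalues mark the most task-relevant directions.
w, V = np.linalg.eig((Sx_y @ np.linalg.inv(Sx)).T)
order = np.argsort(w.real)
k = 2                                           # assumed feature budget
A = V[:, order[:k]].real.T                      # k x 6 extraction matrix
Z = X @ A.T                                     # goal-relevant features
```

    Growing k trades a larger transmitted feature vector for higher downstream accuracy, which is exactly the tunable knob the abstract describes.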

    Homothetic triangle contact representations of planar graphs

    In this paper we study the problem of computing homothetic triangle contact representations of planar graphs. Since not all planar graphs admit such a representation, we concentrate on meaningful subfamilies of planar graphs and prove that: (i) every two-terminal series-parallel digraph has a homothetic triangle contact representation, which can be computed in linear time; (ii) every partial planar 3-tree admits a homothetic triangle contact representation.
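
    Result (i) concerns two-terminal series-parallel (TTSP) digraphs. As background only (this is not the paper's construction), TTSP digraphs are exactly those that collapse to a single source-sink edge under repeated series and parallel reductions, which the sketch below tests:

```python
from collections import Counter

def is_ttsp(edges, s, t):
    # Classical reduction test (sketch): repeatedly apply parallel and
    # series reductions; a TTSP digraph collapses to the edge (s, t).
    E = Counter(edges)
    changed = True
    while changed:
        changed = False
        # Parallel reduction: merge duplicate edges.
        for e, m in list(E.items()):
            if m > 1:
                E[e] = 1
                changed = True
        # Series reduction: splice out internal vertices with exactly
        # one incoming and one outgoing edge.
        verts = {v for e in E for v in e if v not in (s, t)}
        for w in verts:
            ins = [e for e in E if e[1] == w]
            outs = [e for e in E if e[0] == w]
            if (len(ins) == 1 and len(outs) == 1
                    and E[ins[0]] == 1 and E[outs[0]] == 1):
                u, v = ins[0][0], outs[0][1]
                del E[ins[0]], E[outs[0]]
                E[(u, v)] += 1
                changed = True
                break
    return set(E) == {(s, t)} and E[(s, t)] == 1

# The diamond s->a->t, s->b->t is TTSP:
print(is_ttsp([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")], "s", "t"))
```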
